When deep neural networks (DNNs) are used in safety-critical systems, engineers should determine the safety risks associated with failures (i.e., erroneous outputs) observed during testing. For DNNs processing images, engineers visually inspect all failure-inducing images to identify common characteristics among them. Such characteristics correspond to hazard-triggering events (e.g., low illumination), which are essential inputs for safety analysis. Though informative, this activity is expensive and error-prone. To support such safety-analysis practices, we propose SEDE, a technique that generates readable descriptions of the commonalities in failure-inducing, real-world images and improves the DNN through effective retraining. SEDE leverages the availability of simulators, which are commonly used for cyber-physical systems. It relies on genetic algorithms to drive simulators towards generating images that are similar to the failure-inducing, real-world images in the test set. It then employs rule-learning algorithms to derive expressions that capture the commonalities in terms of simulator parameter values. The derived expressions are then used to generate additional images to retrain and improve the DNN. With DNNs performing in-car sensing tasks, SEDE successfully characterized hazard-triggering events leading to a DNN accuracy drop. Furthermore, SEDE enabled retraining that led to significant improvements in DNN accuracy, up to 18 percentage points.
translated by 谷歌翻译
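The search step described above can be sketched as a simple genetic algorithm. This is a minimal illustration, not the authors' implementation: the simulator and the failure-inducing image are replaced by a stand-in fitness function measuring distance from a hypothetical target parameter vector.

```python
import random

# Minimal sketch of the SEDE search idea: a genetic algorithm drives
# simulator parameters so that rendered images resemble a failure-inducing
# real-world image. Here the "simulator" and the "distance to the failing
# image" are stand-ins: each individual is a parameter vector, and fitness
# is its squared distance from a hypothetical target vector.

TARGET = [0.2, 0.8, 0.5]   # stands in for the failing image's latent cause

def fitness(params):
    """Lower is better: distance between simulated and failing image."""
    return sum((p - t) ** 2 for p, t in zip(params, TARGET))

def mutate(params, rate=0.3, scale=0.1):
    return [p + random.uniform(-scale, scale) if random.random() < rate else p
            for p in params]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=60, seed=0):
    random.seed(seed)
    pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # elitist selection
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
```

In SEDE, the analogous loop evaluates candidates by actually rendering simulator images and comparing them to the real failing images.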
Deep neural networks (DNNs) have demonstrated superior performance over classical machine learning to support many features in safety-critical systems. Although DNNs are now widely used in such systems (e.g., self-driving cars), there is limited progress regarding automated support for functional safety analysis in DNN-based systems. For example, the identification of root causes of errors, to enable both risk analysis and DNN retraining, remains an open problem. In this paper, we propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors. SAFE relies on a transfer learning model pre-trained on ImageNet to extract the features from error-inducing images. It then applies a density-based clustering algorithm to detect arbitrary-shaped clusters of images modeling plausible causes of error. Last, clusters are used to effectively retrain and improve the DNN. The black-box nature of SAFE is motivated by our objective not to require changes or even access to the DNN internals to facilitate adoption. Experimental results show the superior ability of SAFE in identifying different root causes of DNN errors based on case studies in the automotive domain. It also yields significant improvements in DNN accuracy after retraining, while saving significant execution time and memory when compared to alternatives. CCS Concepts: • Software and its engineering → Software defect analysis; • Computing methodologies → Machine learning.
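The clustering stage of an approach like SAFE can be sketched with a small density-based clustering routine. This is an illustrative sketch, not the SAFE implementation: instead of ImageNet-derived feature vectors it uses hand-made 2D points, and the clustering is a minimal DBSCAN-style algorithm.

```python
import math

# DBSCAN-style density clustering: core points have >= min_pts neighbours
# within radius eps, and clusters grow by connecting core points; points
# reachable from no core point are labelled as noise (-1).

def dbscan(points, eps=1.0, min_pts=3):
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    neighbours = [[j for j in range(n) if j != i and dist(i, j) <= eps]
                  for i in range(n)]
    labels = [None] * n          # None = unvisited, -1 = noise
    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        if len(neighbours[i]) < min_pts:
            labels[i] = -1       # noise (may be re-labelled as border later)
            continue
        labels[i] = cluster
        frontier = list(neighbours[i])
        while frontier:
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster          # border point, not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbours[j]) >= min_pts:
                frontier.extend(neighbours[j])
        cluster += 1
    return labels

# Two dense groups of error-inducing "images" plus one outlier.
feats = [(0, 0), (0.2, 0.1), (0.1, 0.3), (0.3, 0.2),
         (5, 5), (5.2, 5.1), (5.1, 4.9), (4.9, 5.2),
         (20, 20)]
labels = dbscan(feats, eps=1.0, min_pts=3)
```

In SAFE, each resulting cluster would model one plausible root cause of error, and its images would feed retraining.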
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
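The phrase "decoder-only Transformer" refers to a model whose self-attention is causally masked, so each token can only attend to itself and earlier positions. A minimal, framework-free sketch of that mask follows; the attention scores are illustrative values, unrelated to BLOOM's actual weights.

```python
import math

# A decoder-only language model restricts self-attention with a causal
# mask: position i may attend only to positions j <= i. Masked entries
# receive -inf before the softmax, so their attention weight is exactly 0.

def causal_mask(n):
    """n x n matrix: 0.0 where attention is allowed, -inf where it is not."""
    neg_inf = float("-inf")
    return [[0.0 if j <= i else neg_inf for j in range(n)] for i in range(n)]

def masked_softmax(scores, mask):
    out = []
    for row_s, row_m in zip(scores, mask):
        masked = [s + m for s, m in zip(row_s, row_m)]
        mx = max(masked)
        exps = [math.exp(v - mx) for v in masked]   # exp(-inf) == 0.0
        total = sum(exps)
        out.append([e / total for e in exps])
    return out

mask = causal_mask(3)
attn = masked_softmax([[1.0, 2.0, 3.0]] * 3, mask)
```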
It is well known that the performance of any classification model is effective only if the datasets used for the training and test processes satisfy some specific requirements. In other words, the larger, more balanced, and more representative the dataset, the more one can trust the proposed model's effectiveness and, consequently, the obtained results. Unfortunately, large-size anonymous datasets are generally not publicly available in biomedical applications, especially those dealing with pathological human face images. This concern makes deep-learning-based approaches challenging to deploy and difficult to reproduce or verify against some published results. In this paper, we suggest an efficient method to generate a realistic anonymous synthetic dataset of human faces with the attributes of acne disorders corresponding to three levels of severity (i.e., Mild, Moderate, and Severe). Therefore, a specific hierarchical StyleGAN-based algorithm trained at distinct levels is considered. To evaluate the performance of the proposed scheme, we consider a CNN-based classification system, trained using the generated synthetic acneic face images and tested using authentic face images. Consequently, we show that an accuracy of 97.6\% is achieved using InceptionResNetV2. As a result, this work allows the scientific community to employ the generated synthetic dataset for any data processing application without restrictions on legal or ethical concerns. Moreover, this approach can also be extended to other applications requiring the generation of synthetic medical images. We make the code and the generated dataset accessible to the scientific community.
The biliary tract is a network of tubes connecting the liver to the gallbladder, an organ located just beneath it. The bile duct is the main tube in the biliary tree. Dilation of the bile duct is a key indicator of more serious problems in the human body, such as stones and tumors, which are often caused by the pancreas or the papilla of Vater. In many cases, detecting bile duct dilation can be challenging for novice or untrained medical personnel; even professionals cannot detect bile duct dilation with the naked eye. This study proposes a unique vision-based model for initial diagnosis. To segment the biliary tree from magnetic resonance images (MRI), the framework uses different image-processing methods. After segmenting the region of interest in the images, numerous calculations are performed on it to extract 10 features, including major and minor axes, bile duct area, biliary tree area, compactness, and some texture features (contrast, mean, variance, and correlation). This study used an image database from King Hussein Medical Center in Amman, Jordan, comprising 200 MRI images: 100 normal cases and 100 patients with dilated bile ducts. After feature extraction, various classifiers were used to determine the patient's condition (normal or dilated). The findings show that the extracted features perform well with all classifiers in terms of accuracy and area under the curve. This study is unique in that it uses an automated method to segment the biliary tree from MRI images and scientifically correlates the retrieved features with the state of the biliary tree, which has never been done before in the literature.
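One of the shape features listed above, compactness, can be sketched directly from a binary segmentation mask. The definition below (perimeter squared over 4π times area, which equals 1 for a perfect disc) is a common choice and an assumption here, not necessarily the paper's exact formulation.

```python
import math

# Sketch of a compactness feature computed from a binary pixel mask:
# area is the pixel count, and perimeter is the number of exposed pixel
# edges (pixel sides bordering background or the image boundary).

def area(mask):
    return sum(v for row in mask for v in row)

def perimeter(mask):
    h, w = len(mask), len(mask[0])
    p = 0
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < h and 0 <= nj < w) or not mask[ni][nj]:
                    p += 1
    return p

def compactness(mask):
    # perimeter^2 / (4 * pi * area): 1.0 for a disc, larger for
    # elongated or irregular regions.
    return perimeter(mask) ** 2 / (4 * math.pi * area(mask))

square = [[1, 1], [1, 1]]   # 2x2 filled block: area 4, perimeter 8
c = compactness(square)
```

The texture features mentioned in the abstract (contrast, mean, variance, correlation) would typically come from grey-level statistics rather than from the binary mask alone.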
Predicting the performance of production code prior to actual execution or benchmarking is highly challenging. In this paper, we propose a predictive model, dubbed TEP-GNN, which demonstrates that high-accuracy performance prediction is possible for the special case of predicting unit test execution times. TEP-GNN uses FA-ASTs, or flow-augmented ASTs, as a graph-based code representation, and predicts test execution times using a powerful graph neural network (GNN) deep learning model. We evaluate TEP-GNN on four real-life Java open-source programs, based on 922 test files mined from the projects' public repositories. We find that our approach achieves a high Pearson correlation of 0.789, outperforming a baseline deep learning model. However, we also find that more work is needed for trained models to generalize to unseen projects. Our work demonstrates that FA-ASTs and GNNs are a feasible approach for predicting absolute performance values, and serves as an important intermediary step towards being able to predict the performance of arbitrary code prior to execution.
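The core mechanism, message passing over a graph-based code representation followed by a pooled regression readout, can be sketched without any ML framework. Everything below (the tiny graph, the features, and the weights) is an illustrative stand-in, not the TEP-GNN architecture.

```python
# Minimal sketch of the GNN idea behind TEP-GNN: each node of a
# (flow-augmented) AST carries a feature vector; one round of message
# passing averages neighbour features into each node, and a mean-pooled
# linear readout yields a single scalar "predicted execution time".

def message_pass(features, adjacency):
    """One averaging message-passing step: node <- mean(self + neighbours)."""
    new = []
    for i, feat in enumerate(features):
        group = [feat] + [features[j] for j in adjacency[i]]
        new.append([sum(col) / len(group) for col in zip(*group)])
    return new

def readout(features, weights):
    """Mean-pool node features, then apply a linear layer to one scalar."""
    n = len(features)
    pooled = [sum(f[k] for f in features) / n for k in range(len(features[0]))]
    return sum(p * w for p, w in zip(pooled, weights))

# Tiny 3-node "AST": node 0 is the parent of nodes 1 and 2.
adjacency = {0: [1, 2], 1: [0], 2: [0]}
feats = [[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
feats = message_pass(feats, adjacency)
pred = readout(feats, weights=[0.5, 0.5])
```

A trained model would learn the per-layer transformation and readout weights from measured execution times; here they are fixed for illustration.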
Surround-view cameras are a primary sensor for automated driving, used for near-field perception. They are among the most commonly deployed sensors on commercial vehicles, used primarily for parking visualization and automated parking. Four fisheye cameras with a 190° field of view cover the full 360° around the vehicle. Due to their high radial distortion, standard algorithms do not extend to them easily. Previously, we released the first public fisheye surround-view dataset, named WoodScape. In this work, we release a synthetic version of the surround-view dataset, covering many of its weaknesses and extending it. Firstly, pixel-wise ground truth for optical flow and depth cannot be obtained from real images. Secondly, WoodScape did not have all four cameras annotated simultaneously, in order to sample diverse frames. However, this meant that multi-camera algorithms could not be designed to obtain a unified output in bird's-eye-view space, which is enabled in the new dataset. We implemented surround-view fisheye geometric projections in the CARLA simulator, matching WoodScape's configuration, and created SynWoodScape.
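Fisheye geometry of the kind described above is often approximated, to first order, by the equidistant projection model, where image radius grows linearly with the angle from the optical axis; this also shows why a field of view beyond 180° is representable. The sketch below uses this textbook model with illustrative intrinsics, not the actual WoodScape/SynWoodScape calibration (which uses a higher-order polynomial).

```python
import math

# Equidistant fisheye projection: a 3D camera-frame point projects to
# image radius r = f * theta, where theta is the angle between the ray
# and the optical axis (+Z). Because r grows linearly in theta, rays
# with theta > 90 degrees (behind the image plane) still project to a
# finite radius, which is how a 190-degree field of view fits on the sensor.
# Focal length and principal point below are illustrative values.

def project_equidistant(x, y, z, f=100.0, cx=640.0, cy=480.0):
    theta = math.atan2(math.hypot(x, y), z)   # angle from optical axis
    phi = math.atan2(y, x)                    # azimuth around the axis
    r = f * theta
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

# A point on the optical axis lands at the principal point.
u0, v0 = project_equidistant(0.0, 0.0, 10.0)
# A point at 90 degrees off-axis lands at radius f * pi / 2.
u90, v90 = project_equidistant(1.0, 0.0, 0.0)
```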
Visual attention estimation is an active field of research at the crossroads of different disciplines: computer vision, artificial intelligence, and medicine. One of the most common approaches to estimate a saliency map representing attention is based on the observed image. In this paper, we show that visual attention can be retrieved from EEG acquisitions. The results are comparable to traditional predictions from observed images, which is of great interest. For this purpose, a set of signals has been recorded, and different models have been developed to study the relationship between visual attention and brain activity. The results are encouraging and comparable with approaches based on other modalities. The code and dataset considered in this paper have been made available at \url{https://figshare.com/s/3e353bd1c621962888AD} to promote research in the field.
Emotion estimation is an active field of research with an important impact on the interaction between humans and computers. Among the different modalities for assessing emotion, electroencephalography (EEG), which represents brain activity, has presented motivating results during the last decade. Emotion estimation from EEG could contribute to the diagnosis or rehabilitation of certain diseases. In this paper, we propose a novel deep learning (DL) model, initially dedicated to computer vision, that takes into account physiological knowledge defined by experts. Joint learning is enhanced with a model saliency analysis. To present a global approach, the model has been evaluated on four publicly available datasets, achieving results comparable to state-of-the-art approaches and outperforming them on two of the datasets, with higher stability reflected by a lower standard deviation. For reproducibility, the code and models proposed in this paper are available at github.com/vdelv/emotion-eeg.
The ability to accurately estimate depth information is crucial for many autonomous applications that must recognize the surrounding environment and predict the depth of important objects. One of the most recently used techniques is monocular depth estimation, in which a depth map is inferred from a single image. This paper improves self-supervised deep learning techniques to perform accurate, generalized monocular depth estimation. The main idea is to train the deep model on sequences of different frames, each of which is geo-tagged with location information. This enables the model to enhance depth estimation given the semantics of the area. We demonstrate the effectiveness of our model in improving depth estimation results. The model was trained in a realistic environment, and the results show improvements in the depth maps after adding location data to the model training phase.